A Modified Orthant-Wise Limited Memory Quasi-Newton Method

Abstract

where $U = V_{k-m} V_{k-m+1} \cdots V_{k-1}$. For L-BFGS, we need not explicitly store the approximate inverse Hessian matrix. Instead, we only require matrix-vector multiplications at each iteration, which can be implemented by a two-loop recursion with a time complexity of $O(mn)$ (Jorge & Stephen, 1999). Thus, we only store $2m$ vectors of length $n$: $s_{k-1}, s_{k-2}, \ldots, s_{k-m}$ and $y_{k-1}, y_{k-2}, \ldots, y_{k-m}$, with a storage complexity of $O(mn)$, which is very useful when $n$ is large. In practice, L-BFGS sets $H_{k-m}$ to $\mu I$, where $\mu = s_{k-1}^{\top} y_{k-1} / \|y_{k-1}\|^{2}$.
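As a concrete illustration, here is a minimal NumPy sketch of the two-loop recursion described above. The function name, the oldest-first list layout, and the assumption that at least one $(s, y)$ pair is stored are ours, not part of the paper.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: computes d = -H_k @ grad without forming H_k.

    s_list and y_list hold the m most recent correction pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i, oldest first.
    Time and storage are both O(mn). Assumes at least one pair is stored.
    """
    q = grad.astype(float).copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: walk the pairs from newest to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q -= alpha * y
    # Initial matrix H_{k-m} = mu * I with mu = s^T y / ||y||^2 (newest pair).
    s, y = s_list[-1], y_list[-1]
    mu = np.dot(s, y) / np.dot(y, y)
    r = mu * q
    # Second loop: walk the pairs from oldest to newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return -r  # search direction d_k = -H_k @ grad
```

A line search along the returned direction then produces the next iterate, after which the newest $(s, y)$ pair replaces the oldest.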


Similar Resources

A Modified Orthant-Wise Limited Memory Quasi-Newton Method with Convergence Analysis

The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method has been demonstrated to be very effective in solving the l1-regularized sparse learning problem. OWL-QN extends L-BFGS from solving unconstrained smooth optimization problems to l1-regularized (non-smooth) sparse learning problems. At each iteration, OWL-QN does not involve any l1-regularized quadratic optimization subproblem and on...
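For illustration, the orthant-wise idea replaces the gradient of the non-smooth objective $f(x) + \lambda \|x\|_1$ with a pseudo-gradient. Below is a minimal sketch of the standard OWL-QN pseudo-gradient; the names g and lam and the vectorized layout are ours.

```python
import numpy as np

def pseudo_gradient(x, g, lam):
    """Pseudo-gradient of F(x) = f(x) + lam * ||x||_1 used by OWL-QN.

    g is the gradient of the smooth part f at x. Where x_i != 0 the l1
    term is differentiable; where x_i == 0 we take the one-sided
    derivative that points out of zero, or 0 if neither side descends.
    """
    pg = np.zeros_like(x)
    nonzero = x != 0
    pg[nonzero] = g[nonzero] + lam * np.sign(x[nonzero])
    zero = ~nonzero
    right = g[zero] + lam  # right-hand derivative at x_i = 0
    left = g[zero] - lam   # left-hand derivative at x_i = 0
    pg_zero = np.zeros_like(right)
    pg_zero[right < 0] = right[right < 0]  # moving right decreases F
    pg_zero[left > 0] = left[left > 0]     # moving left decreases F
    pg[zero] = pg_zero
    return pg
```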


Newton-Like Methods for Sparse Inverse Covariance Estimation

We propose two classes of second-order optimization methods for solving the sparse inverse covariance estimation problem. The first approach, which we call the Newton-LASSO method, minimizes a piecewise quadratic model of the objective function at every iteration to generate a step. We employ the fast iterative shrinkage-thresholding algorithm (FISTA) to solve this subproblem. The second approach,...
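FISTA's core building block is the soft-thresholding (proximal) step for the l1 term; a minimal sketch follows, with step size t and regularization weight lam as assumed names. FISTA additionally applies a momentum extrapolation between such steps, omitted here.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: shrinks each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_step(x, grad, t, lam):
    """One shrinkage-thresholding step on f(x) + lam * ||x||_1,
    given the gradient of the smooth part f and a step size t."""
    return soft_threshold(x - t * grad, t * lam)
```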


Optimizing Costly Functions with Simple Constraints: A Limited-Memory Projected Quasi-Newton Algorithm

An optimization algorithm for minimizing a smooth function over a convex set is described. Each iteration of the method computes a descent direction by minimizing, over the original constraints, a diagonal plus low-rank quadratic approximation to the function. The quadratic approximation is constructed using a limited-memory quasi-Newton update. The method is suitable for large-scale problems wh...
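The paper's step minimizes a diagonal-plus-low-rank quadratic model over the original constraints; as a much simpler stand-in that shows only the projection idea, here is a projected-gradient step onto box constraints. All names are ours, and this is not the paper's algorithm.

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box {x : lower <= x <= upper}."""
    return np.clip(x, lower, upper)

def projected_gradient_step(x, grad, t, lower, upper):
    """One projected-gradient step: move along -grad with step size t,
    then project back onto the feasible box. The paper instead minimizes
    a diagonal plus low-rank quadratic model over the constraints."""
    return project_box(x - t * grad, lower, upper)
```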


Training L1-Regularized Models with Orthant-Wise Passive Descent Algorithms

The l1-regularized sparse model has been popular in the machine learning community. The orthant-wise quasi-Newton (OWL-QN) method is a representative fast algorithm for training the model. However, the proof of its convergence has been pointed out to be incorrect by multiple sources, and up until now its convergence has not been proved at all. In this paper, we propose a stochastic OWL-QN method for...


A limited memory adaptive trust-region approach for large-scale unconstrained optimization

This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique leads us to decrease the number of subproblems solved, while utilizing the structure of limited-memory quasi-Newton...
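The paper's specific adaptive radius rule is not reproduced here; for background, a textbook trust-region radius update based on the agreement ratio looks like the sketch below. The 0.25/0.75 thresholds and the scaling factors are the conventional defaults, not the paper's strategy.

```python
def update_radius(delta, rho, delta_max=100.0):
    """Textbook trust-region radius update from the agreement ratio
    rho = (actual reduction) / (predicted reduction)."""
    if rho < 0.25:
        return 0.25 * delta                  # poor model fit: shrink
    if rho > 0.75:
        return min(2.0 * delta, delta_max)   # good fit: cautiously expand
    return delta                             # acceptable fit: keep as-is
```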





Publication date: 2015